Solving partial differential equations is difficult. Recently proposed neural resolution-invariant models, despite their effectiveness and efficiency, usually require data sampled at equispaced spatial points. However, spatial sampling in real-world systems is sometimes inevitably non-equispaced, which limits their applicability. In this paper, we propose a Non-equispaced Fourier PDE Solver (\textsc{NFS}) whose components are an adaptive interpolation onto resampled equispaced points and a variant of Fourier Neural Operators. Experimental results on complex PDEs demonstrate its advantages in accuracy and efficiency. Compared with spatially-equispaced benchmark methods, it achieves superior performance with a $42.85\%$ improvement in MAE, and it handles non-equispaced data with only a tiny loss of accuracy. Moreover, to the best of our knowledge, \textsc{NFS} is the first ML-based method with mesh-invariant inference that successfully models turbulent flows in non-equispaced scenarios, with only a minor deviation of the error on unseen spatial points.
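As a toy illustration of the two ingredients the abstract names (interpolation onto a resampled equispaced grid, followed by Fourier-layer processing), the sketch below resamples non-equispaced 1D observations with plain linear interpolation and applies one FNO-style spectral convolution. The function names and the use of `np.interp` are illustrative assumptions, not the paper's actual adaptive interpolation.

```python
import numpy as np

def to_equispaced(x_obs, u_obs, n_grid):
    """Resample values observed at non-equispaced points x_obs onto an
    equispaced grid. NFS uses an adaptive/learned interpolation; np.interp
    is only a simple stand-in here."""
    x_grid = np.linspace(x_obs.min(), x_obs.max(), n_grid)
    return x_grid, np.interp(x_grid, x_obs, u_obs)

def spectral_conv_1d(u, weights, n_modes):
    """One FNO-style spectral convolution: FFT, keep the lowest modes,
    multiply by (learnable) complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))

# Toy usage: a sine wave sampled at random (non-equispaced) locations.
rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0.0, 2 * np.pi, 80))
u_obs = np.sin(3 * x_obs)
x_grid, u_grid = to_equispaced(x_obs, u_obs, n_grid=128)
weights = rng.standard_normal(16) + 1j * rng.standard_normal(16)  # stand-in for trained weights
u_out = spectral_conv_1d(u_grid, weights, n_modes=16)
```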
In this work, we present a new computer vision task named video object of interest segmentation (VOIS). Given a video and a target image of interest, our objective is to simultaneously segment and track all objects in the video that are relevant to the target image. This problem combines the traditional video object segmentation task with an additional image indicating the content that users are concerned with. Since no existing dataset is perfectly suited to this new task, we construct a large-scale dataset called LiveVideos, which contains 2418 pairs of target images and live videos with instance-level annotations. In addition, we propose a transformer-based method for this task. We revisit Swin Transformer and design a dual-path structure to fuse video and image features. A transformer decoder is then employed to generate object proposals for segmentation and tracking from the fused features. Extensive experiments on the LiveVideos dataset show the superiority of our proposed method.
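A minimal sketch of the described architecture, assuming generic token features: video tokens are fused with target-image tokens via cross-attention (standing in for the dual-path Swin fusion), and a transformer decoder turns learnable queries into object proposals. All module names, dimensions, and head counts here are hypothetical.

```python
import torch
import torch.nn as nn

class DualPathFusionSketch(nn.Module):
    """Hypothetical VOIS-style head: fuse per-frame video tokens with
    target-image tokens by cross-attention, then decode object queries."""
    def __init__(self, dim=256, n_queries=10):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.queries = nn.Parameter(torch.randn(n_queries, dim))

    def forward(self, video_tokens, image_tokens):
        # video_tokens: (B, Nv, dim) from a video backbone (e.g. Swin)
        # image_tokens: (B, Ni, dim) from the target-image path
        fused, _ = self.cross_attn(video_tokens, image_tokens, image_tokens)
        q = self.queries.unsqueeze(0).expand(video_tokens.size(0), -1, -1)
        return self.decoder(q, fused)  # (B, n_queries, dim) object proposals

proposals = DualPathFusionSketch()(torch.randn(2, 196, 256), torch.randn(2, 49, 256))
```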
Offline reinforcement learning (RL) enables an agent to learn effectively from logged data, which significantly extends the applicability of RL algorithms to real-world scenarios where exploration can be expensive or unsafe. Previous works have shown that extracting primitive skills from the recurring and temporally extended structures in the logged data yields better learning. However, these methods suffer greatly when the primitives have limited ability to represent, and thus recover, the original policy space, especially in offline settings. In this paper, we give a quantitative characterization of the performance of offline hierarchical learning and highlight the importance of learning lossless primitives. To this end, we propose a \emph{flow}-based structure as the representation for low-level policies. This allows us to represent the behaviors in the dataset faithfully while retaining the expressive power to recover the whole policy space. We show that such lossless primitives can drastically improve the performance of hierarchical policies. Experimental results and extensive ablation studies on the standard D4RL benchmark show that our method has good representation ability for policies and achieves superior performance on most tasks.
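A small sketch of the flow-based primitive idea: a conditional affine coupling layer is exactly invertible, so a stack of such layers gives a low-level policy that can both evaluate the likelihood of dataset actions and reconstruct them without loss. This is a generic RealNVP-style layer under assumed dimensions, not the paper's specific architecture.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One coupling layer of a state-conditioned normalizing flow over actions.
    Stacking invertible layers like this yields a 'lossless' primitive: every
    behavior in the dataset maps to and from a latent code exactly."""
    def __init__(self, action_dim, state_dim, hidden=128):
        super().__init__()
        self.half = action_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (action_dim - self.half)),
        )

    def forward(self, a, s):
        a1, a2 = a[:, :self.half], a[:, self.half:]
        scale, shift = self.net(torch.cat([a1, s], dim=-1)).chunk(2, dim=-1)
        z2 = a2 * torch.exp(scale) + shift          # invertible affine transform
        log_det = scale.sum(dim=-1)                 # exact log-likelihood contribution
        return torch.cat([a1, z2], dim=-1), log_det

    def inverse(self, z, s):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        scale, shift = self.net(torch.cat([z1, s], dim=-1)).chunk(2, dim=-1)
        a2 = (z2 - shift) * torch.exp(-scale)
        return torch.cat([z1, a2], dim=-1)
```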
Current computer vision models, unlike the human visual system, cannot yet achieve general-purpose visual understanding. Existing efforts to create a general vision model are limited in the scope of assessed tasks and offer no overarching framework for performing them holistically. We present a new comprehensive benchmark, General-purpose Visual Understanding Evaluation (G-VUE), covering the full spectrum of visual cognitive abilities across four functional domains: Perceive, Ground, Reason, and Act. The four domains are embodied in 11 carefully curated tasks, ranging from 3D reconstruction to visual reasoning and manipulation. Along with the benchmark, we provide a general encoder-decoder framework that allows arbitrary visual representations to be evaluated on all 11 tasks. We evaluate various pre-trained visual representations with our framework and observe that (1) Transformer-based visual backbones generally outperform CNN-based backbones on G-VUE, and (2) visual representations from vision-language pre-training are superior to those from vision-only pre-training across visual tasks. With G-VUE, we provide a holistic evaluation standard that motivates research toward building general-purpose visual systems by obtaining more general-purpose visual representations.
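The encoder-decoder evaluation protocol could look roughly like the wrapper below: one shared visual encoder under test, plus lightweight task-specific decoders for the 11 tasks. Class and head names are placeholders, not the actual G-VUE codebase API.

```python
import torch.nn as nn

class GVUEStyleProbe(nn.Module):
    """Illustrative-only wrapper in the spirit of an encoder-decoder benchmark:
    a shared (often frozen) visual encoder feeds lightweight per-task decoders,
    so different representations can be compared under the same protocol."""
    def __init__(self, encoder, task_heads: dict):
        super().__init__()
        self.encoder = encoder                  # any visual backbone under evaluation
        self.heads = nn.ModuleDict(task_heads)  # e.g. {"depth": ..., "vqa": ..., "nav": ...}

    def forward(self, images, task):
        feats = self.encoder(images)
        return self.heads[task](feats)
```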
Pre-trained vision-language models like CLIP have recently shown superior performance on various downstream tasks, including image classification and segmentation. However, in fine-grained image re-identification (ReID), the labels are indexes, lacking concrete text descriptions, so it remains unclear how such models can be applied to these tasks. This paper first shows that simply fine-tuning the visual model initialized with CLIP's image encoder already obtains competitive performance on various ReID tasks. We then propose a two-stage strategy to facilitate a better visual representation. The key idea is to fully exploit the cross-modal description ability of CLIP through a set of learnable text tokens for each ID, which are fed to the text encoder to form ambiguous descriptions. In the first training stage, the image and text encoders from CLIP are kept fixed, and only the text tokens are optimized from scratch by a contrastive loss computed within each batch. In the second stage, the ID-specific text tokens and their encoder become static, providing constraints for fine-tuning the image encoder. With the help of the designed loss in the downstream task, the image encoder learns to represent images accurately as vectors in the feature embedding space. The effectiveness of the proposed strategy is validated on several datasets for person and vehicle ReID tasks. Code is available at https://github.com/Syliz517/CLIP-ReID.
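A hedged sketch of the first-stage objective as described: with both CLIP encoders frozen, only the per-ID learnable text tokens receive gradients from a contrastive loss between image features and the ID text features those tokens produce. The function below simplifies this to a single cross-entropy over cosine similarities; the name, tensor shapes, and exact loss form are assumptions, not the released CLIP-ReID code.

```python
import torch
import torch.nn.functional as F

def stage1_prompt_loss(img_feats, id_text_feats, labels, tau=0.07):
    """Simplified stage-1 objective: encoders are frozen, and only the per-ID
    learnable text tokens (which produce id_text_feats) get gradients via a
    cross-entropy over image-to-ID cosine similarities."""
    img = F.normalize(img_feats, dim=-1)       # (B, D), from the frozen image encoder
    txt = F.normalize(id_text_feats, dim=-1)   # (num_ids, D), from learnable tokens + frozen text encoder
    logits = img @ txt.t() / tau               # (B, num_ids)
    return F.cross_entropy(logits, labels)     # labels: ground-truth ID index per image

loss = stage1_prompt_loss(torch.randn(32, 512), torch.randn(100, 512),
                          torch.randint(0, 100, (32,)))
```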
Perceiving and manipulating objects in a generalizable way has been actively studied by the computer vision and robotics communities, where cross-category generalizable manipulation skills are highly desired yet underexplored. In this work, we propose to learn such generalizable perception and manipulation via Generalizable and Actionable Parts (GAParts). By identifying and defining 9 GAPart classes (e.g., buttons, handles, etc.), we show that our part-centric approach allows our method to learn object perception and manipulation skills from seen object categories and directly generalize to unseen categories. Following the GAPart definition, we construct a large-scale part-centric interactive dataset, GAPartNet, where rich part-level annotations (semantics, poses) are provided for 1166 objects and 8489 part instances. Based on GAPartNet, we investigate three cross-category tasks: part segmentation, part pose estimation, and part-based object manipulation. Given the large domain gaps between seen and unseen object categories, we propose a strong 3D segmentation method from the perspective of domain generalization by integrating adversarial learning techniques. Our method outperforms all existing methods by a large margin on both seen and unseen categories. Furthermore, using the part segmentation and pose estimation results, we leverage the GAPart pose definition to design part-based manipulation heuristics that generalize well to unseen object categories in both simulation and the real world. The dataset and code will be released.
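One standard way to realize the adversarial-learning component for domain generalization is a gradient-reversal layer feeding a domain (category-group) discriminator, sketched below; this is a generic construction, not necessarily the exact mechanism used in GAPartNet.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated (scaled)
    gradient in the backward pass, so the feature extractor learns to
    confuse the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DomainDiscriminator(nn.Module):
    """Classifies which domain (e.g. seen object category group) a part
    feature came from; trained adversarially through gradient reversal."""
    def __init__(self, feat_dim, n_domains):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_domains))

    def forward(self, part_feats, lam=1.0):
        return self.mlp(GradReverse.apply(part_feats, lam))
```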
Since the recent success of Vision Transformers (ViTs), explorations of transformer-style architectures have triggered a resurgence of modern ConvNets. In this work, we explore the representation ability of DNNs through the lens of interaction complexity. We empirically show that interaction complexity is an overlooked but essential indicator for visual recognition. Accordingly, a new family of efficient ConvNets, named MogaNet, is presented to pursue informative context mining in pure ConvNet-based models, with favorable complexity-performance trade-offs. In MogaNet, interactions across multiple complexities are facilitated and contextualized by two specially designed aggregation blocks operating in the spatial and channel interaction spaces, respectively. Extensive studies are conducted on ImageNet classification, COCO object detection, and ADE20K semantic segmentation tasks. The results demonstrate that MogaNet establishes a new state of the art over other popular methods in mainstream scenarios and at all model scales. Notably, the lightweight MogaNet-T achieves 80.0\% top-1 accuracy with only 1.44G FLOPs using a refined training setup on ImageNet-1K, surpassing ParC-Net-S by 1.4\% accuracy while saving 59\% (2.04G) FLOPs.
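A loose sketch of what a gated spatial context-aggregation block can look like (multi-scale depthwise convolutions gathering context, modulated by a learned gate); the real MogaNet blocks differ in details such as the gating nonlinearity, the multi-order decomposition, and the separate channel-aggregation block, so treat this purely as an illustration of the idea.

```python
import torch
import torch.nn as nn

class GatedSpatialAggregation(nn.Module):
    """Simplified gated context-aggregation block (not the exact MogaNet
    design): depthwise convolutions at two scales gather spatial context,
    and a sigmoid gate decides which interactions pass through."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Conv2d(dim, dim, 1)
        self.ctx_small = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.ctx_large = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        context = self.ctx_small(x) + self.ctx_large(x)
        return self.proj(torch.sigmoid(self.gate(x)) * context)

out = GatedSpatialAggregation(64)(torch.randn(1, 64, 56, 56))
```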
3D human pose estimation from monocular videos has recently seen significant improvement. However, most state-of-the-art methods are kinematics-based, which makes them prone to physically implausible motions with pronounced artifacts. Current dynamics-based methods can predict physically plausible motion but are limited to simple scenes with a static camera view. In this work, we present D&D (Learning Human Dynamics from Dynamic Cameras), which leverages the laws of physics to reconstruct 3D human motion from in-the-wild videos captured with a moving camera. D&D introduces Inertial Force Control (IFC), which accounts for the inertial forces induced by the dynamic camera to explain 3D human motion in the non-inertial local frame. To learn ground contact with limited annotations, we develop Probabilistic Contact Torque (PCT), which is computed by differentiable sampling over contact probabilities and used to generate motion. The contact state can be weakly supervised by encouraging the model to produce correct motion. Furthermore, we propose an attentive PD controller that adjusts the target pose states using temporal information to obtain smooth and accurate pose control. Our approach is entirely neural and runs without offline optimization or simulation in a physics engine. Experiments on large-scale 3D human motion benchmarks demonstrate the effectiveness of D&D, where we show superior performance over both state-of-the-art kinematics-based and dynamics-based methods. Code is available at https://github.com/jeffsjtu/dnd
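For reference, a plain (non-attentive) PD controller over joint angles is sketched below; D&D's attentive PD controller additionally adjusts the target pose states with temporal attention, which is omitted here. All variable names and gain values are illustrative.

```python
import numpy as np

def pd_control(q_target, q_current, q_dot, kp, kd):
    """Plain proportional-derivative control over pose (joint) angles:
    the torque pulls the current pose toward the target and damps velocity.
    D&D's attentive PD controller further refines q_target with temporal
    attention, which this simplified sketch leaves out."""
    return kp * (q_target - q_current) - kd * q_dot

# Toy usage with hypothetical joint-angle vectors and gains.
tau = pd_control(q_target=np.zeros(23), q_current=np.full(23, 0.1),
                 q_dot=np.zeros(23), kp=300.0, kd=30.0)
```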
Recent studies have shown that detectors based on deep models are vulnerable to adversarial examples, even in black-box scenarios where the attacker has no access to model information. Most existing attack methods aim only to minimize the true positive rate, which often yields poor attack performance, because another sub-optimal bounding box may be detected around the attacked bounding box and become the new true positive. To address this challenge, we propose to minimize the true positive rate while maximizing the false positive rate, which encourages more false positive objects to block the generation of new true positive bounding boxes. This is modeled as a multi-objective optimization (MOP) problem, for which a genetic algorithm can search for Pareto-optimal solutions. However, our task has more than two million decision variables, leading to low search efficiency. We therefore extend the standard genetic algorithm with Random Subset selection and Divide-and-Conquer, called GARSDC, which significantly improves efficiency. Moreover, to alleviate the sensitivity to population quality in genetic algorithms, we generate a gradient-prior population by exploiting the transferability between detectors with similar backbones. Compared with state-of-the-art attack methods, GARSDC decreases mAP by 12.0 on average and reduces the number of queries by about 1000 times in extensive experiments. Our code can be found at https://github.com/liangsiyuan21/garsdc.
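The random-subset idea can be illustrated with the toy genetic-algorithm step below, where each mutation perturbs only a small random subset of the millions of pixel-level decision variables; the divide-and-conquer scheme and the gradient-prior population of GARSDC are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def ga_step_random_subset(population, fitness_fn, subset_size, rng, eps=8 / 255):
    """One illustrative GA generation in the spirit of random subset selection:
    keep the fitter half of the perturbation population, then create children
    by mutating only a small random subset of each parent's variables, keeping
    the perturbation within the L_inf budget eps."""
    scores = np.array([fitness_fn(p) for p in population])
    parents = population[np.argsort(scores)[-len(population) // 2:]]  # best half
    children = []
    for p in parents:
        child = p.copy()
        idx = rng.choice(child.size, size=subset_size, replace=False)
        child.flat[idx] = np.clip(child.flat[idx] + rng.normal(0, eps / 4, subset_size),
                                  -eps, eps)
        children.append(child)
    return np.concatenate([parents, np.stack(children)], axis=0)

# Toy usage: 8 perturbations over a 64x64x3 image, fitness = mean perturbation as a stand-in.
rng = np.random.default_rng(0)
pop = rng.uniform(-8 / 255, 8 / 255, size=(8, 64, 64, 3))
pop = ga_step_random_subset(pop, lambda p: float(p.mean()), subset_size=200, rng=rng)
```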
With the remarkable progress of deep neural networks in computer vision, data mixing techniques have been widely studied to alleviate the degradation of generalization when the amount of training data is limited. However, mixup strategies have not yet been well assembled in current vision toolboxes. In this paper, we propose \texttt{OpenMixup}, an open-source all-in-one toolbox for supervised, semi-supervised, and self-supervised visual representation learning with mixup. It provides an integrated model design and training platform that includes a collection of mainstream network architectures and modules, a collection of data mixing augmentation methods, and practical model analysis tools. Moreover, we provide standard mixup image classification benchmarks on various datasets, which helps practitioners make fair comparisons with state-of-the-art methods under the same settings. The source code and user documentation are available at \url{https://github.com/westlake-ai/openmixup}.
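The basic operation that such a toolbox assembles (alongside many variants) is standard input mixup, sketched below; this is generic reference code, not the OpenMixup API itself.

```python
import numpy as np
import torch

def mixup_batch(images, labels_onehot, alpha=0.2):
    """Standard input mixup: convexly combine a batch with a shuffled copy of
    itself, mixing both the images and their one-hot labels with the same
    Beta(alpha, alpha)-sampled coefficient."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(images.size(0))
    mixed_x = lam * images + (1 - lam) * images[perm]
    mixed_y = lam * labels_onehot + (1 - lam) * labels_onehot[perm]
    return mixed_x, mixed_y

# Toy usage on a random batch of 16 images with 10 classes.
x, y = mixup_batch(torch.randn(16, 3, 32, 32), torch.eye(10)[torch.randint(0, 10, (16,))])
```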